Langevin Dynamics with Continuous Tempering for Training Deep Neural Networks

Ye, Nanyang, Zhu, Zhanxing, Mantiuk, Rafal

Neural Information Processing Systems

Minimizing non-convex, high-dimensional objective functions is challenging, especially when training modern deep neural networks. In this paper, we propose a novel approach that divides the training process into two consecutive phases to obtain better generalization performance: Bayesian sampling and stochastic optimization. The first phase explores the energy landscape and captures the "fat" modes; the second fine-tunes the parameters learned in the first phase. In the Bayesian learning phase, we incorporate continuous tempering and stochastic approximation into Langevin dynamics to create an efficient and effective sampler, in which the temperature is adjusted automatically according to the designed "temperature dynamics". These strategies overcome the challenge of becoming trapped early in poor local minima and achieve remarkable improvements across various types of neural networks, as shown in our theoretical analysis and empirical experiments.
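To make the two-phase scheme concrete, here is a minimal sketch, not the authors' implementation, of tempered Langevin exploration followed by deterministic fine-tuning on a toy one-dimensional objective. The objective f, the temperature map g, the simple linear decay of the auxiliary variable alpha (a stand-in for the paper's "temperature dynamics"), and all step sizes are illustrative assumptions.

```python
# A minimal sketch of the two-phase scheme described in the abstract.
# Phase 1 runs Langevin dynamics whose temperature T = g(alpha) is driven
# by an auxiliary variable alpha (here a plain linear decay, standing in
# for the paper's "temperature dynamics"); phase 2 switches to
# deterministic gradient descent to fine-tune.
import numpy as np

rng = np.random.default_rng(0)

def f(x):                      # toy non-convex objective with several minima
    return np.sin(3 * x) + 0.1 * x ** 2

def grad_f(x):
    return 3 * np.cos(3 * x) + 0.2 * x

def g(alpha):                  # maps the auxiliary variable to a temperature >= 0
    return np.exp(alpha)

x, alpha = 4.0, 0.0            # start far from the global minimum, T = 1
eta = 1e-2                     # step size (illustrative)

# Phase 1: tempered Langevin exploration.
for _ in range(5000):
    T = g(alpha)
    noise = np.sqrt(2 * eta * T) * rng.standard_normal()
    x = x - eta * grad_f(x) + noise
    alpha -= 1e-3              # stand-in dynamics: slowly anneal T toward 0

# Phase 2: deterministic fine-tuning from the explored region.
for _ in range(2000):
    x = x - eta * grad_f(x)

print(f"final x = {x:.3f}, f(x) = {f(x):.3f}")
```

In phase 1 the injected noise scales with sqrt(2 * eta * T), so a high initial temperature lets the iterate hop between basins; as alpha decays, T approaches 0 and the dynamics approach plain gradient descent, which phase 2 then completes.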




Reviews: Langevin Dynamics with Continuous Tempering for Training Deep Neural Networks

Neural Information Processing Systems

The paper proposes a new method to optimize deep neural networks: it starts with a stochastic search using 'original' Langevin dynamics (where the temperature appears as a function of an auxiliary variable), then transitions to a more classical, deterministic algorithm. I enjoyed reading the paper. I am not an expert in the field, but as far as I could tell the methods are novel, and the idea of treating the temperature as a function of an augmented variable seems elegant, since one can then change the landscape for the temperature (by tweaking g(\alpha) and \phi(\alpha)) without changing the optimum of the objective. The numerical experiments seem to indicate that the method is not more computationally demanding yet improves optimization. I recommend acceptance, with minor caveats below. One caveat: the authors don't explicitly investigate the ability of the algorithm to jump between modes, a property frequently mentioned in the body of the text.

